

Search for: All records

Creators/Authors contains: "Zhang"


  1. Interactive notebook programming is universal in modern ML and AI workflows, with interactive deep learning training (IDLT) emerging as a dominant use case. To ensure responsiveness, platforms like Jupyter and Colab reserve GPUs for long-running notebook sessions despite their intermittent, sporadic GPU usage, leading to extremely low GPU utilization and prohibitively high costs. In this paper, we introduce NotebookOS, a GPU-efficient notebook platform tailored to the unique requirements of IDLT. NotebookOS employs replicated notebook kernels with Raft-synchronized replicas distributed across GPU servers. To optimize GPU utilization, NotebookOS oversubscribes server resources, leveraging the high inter-arrival times of IDLT workloads, and allocates GPUs only during active cell execution. It also supports replica migration and automatic cluster scaling under high load. Altogether, this design enables interactive training with minimal delay. In an evaluation on production workloads, NotebookOS saved over 1,187 GPU hours during 17.5 hours of real-world IDLT while significantly improving interactivity.
    Free, publicly-accessible full text available March 22, 2027
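A minimal toy sketch of the allocation idea the abstract describes: a GPU is bound only for the duration of a cell execution and released immediately after, so the pool can be oversubscribed with more sessions than GPUs. `GpuPool` and its API are hypothetical illustrations, not NotebookOS's actual interface.

```python
import threading

class GpuPool:
    """Toy pool that hands out GPUs only while a cell is running.

    Idle notebook sessions hold no GPU, so the pool can be oversubscribed:
    more sessions than GPUs, relying on the high inter-arrival times
    typical of interactive deep learning training workloads.
    """

    def __init__(self, n_gpus):
        self._sem = threading.Semaphore(n_gpus)

    def run_cell(self, cell_fn):
        # A GPU slot is acquired only for the cell's duration, then released.
        with self._sem:
            return cell_fn()

pool = GpuPool(n_gpus=2)
# Eight "sessions" share two GPUs, since each holds a slot only per cell run.
results = [pool.run_cell(lambda i=i: i * i) for i in range(8)]
```

The semaphore stands in for the scheduler's GPU bookkeeping; the real system would also handle replica placement, migration, and cluster scaling.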
  2. Large deployable mesh reflectors play a critical role in satellite communications, Earth observation, and deep-space exploration, offering high-gain antenna performance through precisely shaped reflective surfaces. Traditional dynamic modeling approaches—such as wave-based and finite element methods—often struggle to accurately capture the complex behavior of three-dimensional reflectors due to oversimplifications of cable members. To address these challenges, this paper proposes a novel spatial discretization framework that systematically decomposes cable member displacements into boundary-induced and internal components in a global Cartesian coordinate system. The framework derives a system of ordinary differential equations for each cable member by enforcing Lagrange's equations, capturing both longitudinal and transverse internal displacements of the cable member. Numerical simulations of a two-dimensional cable-network structure and a center-feed parabolic deployable mesh reflector with 101 nodes illustrate the improved accuracy of the proposed method in predicting vibration characteristics across a broad frequency range. Compared to standard finite element analysis, the proposed method more effectively identifies both low- and high-frequency modes and offers robust convergence and accurate prediction of both frequency and transient responses of the structure. This enhanced predictive capability underscores the significance of incorporating internal cable member displacements for reliable dynamic modeling of large deployable mesh reflectors, ultimately informing better design, control, and on-orbit performance of future space-based reflector systems.
    Free, publicly-accessible full text available February 1, 2027
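A minimal Python sketch of the decomposition idea, not the paper's actual formulation: the displacement of one cable member is split into a boundary-induced part (here, linear interpolation between the prescribed end displacements) plus an internal part that vanishes at both ends (here, a sine series). The function name, the sine-series modal basis, and the linear boundary interpolation are all illustrative assumptions.

```python
import numpy as np

def cable_displacement(x, L, u0, uL, q):
    """Displacement of one cable member at position x (0 <= x <= L).

    u0, uL : prescribed (boundary-induced) end displacements
    q      : modal coordinates of the internal displacement

    Boundary-induced part: linear interpolation between the two ends.
    Internal part: sine modes, each of which vanishes at x = 0 and x = L.
    """
    boundary = u0 + (uL - u0) * x / L
    internal = sum(qk * np.sin((k + 1) * np.pi * x / L)
                   for k, qk in enumerate(q))
    return boundary + internal
```

Because the internal modes vanish at the ends, the member automatically matches the displacements of the nodes it connects to, which is the key property the boundary/internal split provides.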
  3. Free, publicly-accessible full text available January 1, 2027
  4. Free, publicly-accessible full text available January 1, 2027
  5. Free, publicly-accessible full text available December 1, 2026
  6. Free, publicly-accessible full text available November 10, 2026
  7. Free, publicly-accessible full text available November 10, 2026
  8. Vision graph neural networks (ViGs) have demonstrated promise in vision tasks as a competitive alternative to conventional convolutional neural networks (CNNs) and vision transformers (ViTs); however, common graph construction methods, such as k-nearest neighbor (KNN), can be expensive on larger images. While methods such as Sparse Vision Graph Attention (SVGA) have shown promise, SVGA's fixed step scale can lead to over-squashing, requiring multiple connections to convey information that a single long-range link could carry. Motivated by this observation, we propose a new graph construction method, Logarithmic Scalable Graph Construction (LSGC), which enhances performance by limiting the number of long-range links. To this end, we propose LogViG, a novel hybrid CNN-GNN model that utilizes LSGC. Furthermore, inspired by the successes of multi-scale and high-resolution architectures, we introduce a high-resolution branch and fuse features between the high-resolution and low-resolution branches, yielding a multi-scale, high-resolution Vision GNN network. Extensive experiments show that LogViG beats existing ViG, CNN, and ViT architectures in terms of accuracy, GMACs, and parameters on image classification and semantic segmentation tasks. Our smallest model, Ti-LogViG, achieves an average top-1 accuracy on ImageNet-1K of 79.9% with a standard deviation of ±0.2%, a 1.7% higher average accuracy than Vision GNN with a 24.3% reduction in parameters and a 35.3% reduction in GMACs. Our work shows that leveraging long-range links in graph construction for ViGs through our proposed LSGC can exceed the performance of current state-of-the-art ViGs.
    Free, publicly-accessible full text available December 13, 2026
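The abstract does not specify LSGC's exact construction rule; one plausible toy reading of "logarithmic" link selection is doubling offsets, so each node keeps O(log n) links rather than the denser connectivity of a fixed-step scheme. The function name and details below are assumptions for illustration only.

```python
def log_links(i, n):
    """Neighbor indices for node i in a 1-D sequence of n nodes,
    using logarithmically spaced offsets (1, 2, 4, 8, ...).

    Unlike a fixed step scale, offsets double at each round, so the
    number of long-range links per node grows only as O(log n).
    """
    links, step = [], 1
    while step < n:
        for j in (i - step, i + step):
            if 0 <= j < n:
                links.append(j)
        step *= 2
    return sorted(links)
```

For image graphs the same idea would apply per axis over patch coordinates; the 1-D version above just makes the O(log n) link budget concrete.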
  9. The deployment of deep learning-based malware detection systems has transformed cybersecurity, offering sophisticated pattern recognition capabilities that surpass traditional signature-based approaches. However, these systems introduce new vulnerabilities requiring systematic investigation. This chapter examines adversarial attacks against graph neural network-based malware detection systems, focusing on semantics-preserving methodologies that evade detection while maintaining program functionality. We introduce a reinforcement learning (RL) framework that formulates the attack as a sequential decision-making problem, optimizing the insertion of no-operation (NOP) instructions to manipulate graph structure without altering program behavior. Comparative analysis includes three baseline methods: random insertion, hill-climbing, and gradient-approximation attacks. Our experimental evaluation on real-world malware datasets reveals significant differences in effectiveness, with the reinforcement learning approach achieving perfect evasion rates against both Graph Convolutional Network and Deep Graph Convolutional Neural Network architectures while requiring minimal program modifications. Our findings reveal three critical research gaps: transitioning from abstract Control Flow Graph representations to executable binary manipulation, developing universal vulnerability discovery across different architectures, and systematically translating adversarial insights into defensive enhancements. This work contributes to understanding adversarial vulnerabilities in graph-based security systems while establishing frameworks for evaluating the robustness of machine learning-based malware detection.
    Free, publicly-accessible full text available December 1, 2026
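A toy sketch of the semantics-preserving perturbation the abstract describes: inserting NOP instructions into basic blocks changes instruction counts (and hence node features in a CFG-based detector) without changing program behavior. In the RL formulation, each action would choose one (block, offset) pair; the representation and function name here are illustrative assumptions, not the chapter's implementation.

```python
def insert_nops(blocks, positions):
    """Insert 'nop' instructions at chosen (block_id, offset) positions.

    blocks: dict mapping basic-block id -> list of instruction strings.
    Returns a new dict; the originals are left untouched. Because a NOP
    has no architectural effect, program semantics are preserved while
    the per-block instruction counts seen by the detector change.
    """
    out = {b: list(ins) for b, ins in blocks.items()}
    for block_id, offset in positions:
        out[block_id].insert(offset, "nop")
    return out
```

An RL agent would emit `positions` one step at a time, receiving a reward when the perturbed graph crosses the detector's decision boundary.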
  10. Free, publicly-accessible full text available December 8, 2026